
    On Learning the Invisible in Photoacoustic Tomography with Flat Directionally Sensitive Detector

    In photoacoustic tomography (PAT) with a flat sensor, we routinely encounter two types of limited data. The first is due to using a finite sensor and is especially perceptible if the region of interest is large relative to the sensor or located far from it. In this paper, we focus on the second type, caused by a varying sensitivity of the sensor to the incoming wavefront direction, which can be modelled as binary, i.e. by a cone of sensitivity. Such visibility conditions result, in the Fourier domain, in a restriction of both the image and the data to a bowtie, akin to the one corresponding to the range of the forward operator. The visible ranges, in the image and data domains, are related by the wavefront direction mapping. We adapt the wedge restricted Curvelet decomposition, which we previously proposed for the representation of the full PAT data, to separate the visible and invisible wavefronts in the image. We optimally combine fast approximate operators with tailored deep neural network architectures into efficient learned reconstruction methods which reconstruct the visible coefficients, while the invisible coefficients are learned from a training set of similar data.
    Comment: Submitted to SIAM Journal on Imaging Sciences
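
    The bowtie restriction can be illustrated directly in the Fourier domain. The following is a minimal numpy sketch, assuming a binary sensitivity cone of a given half-angle about the detector normal; the mask construction and all names are illustrative, not the paper's wedge restricted Curvelet decomposition.

    ```python
    import numpy as np

    def bowtie_mask(shape, half_angle_deg):
        # Binary bowtie in 2D Fourier space: pass wavevectors lying within
        # the sensitivity cone around the detector normal (the ky axis here).
        kx = np.fft.fftfreq(shape[0])[:, None]
        ky = np.fft.fftfreq(shape[1])[None, :]
        angle = np.arctan2(np.abs(kx), np.abs(ky))  # angle from the ky axis
        return angle <= np.deg2rad(half_angle_deg)

    def split_visible_invisible(image, half_angle_deg=45.0):
        # Separate the wavefronts the sensor can see from those it cannot.
        F = np.fft.fft2(image)
        visible = np.real(np.fft.ifft2(F * bowtie_mask(image.shape, half_angle_deg)))
        return visible, image - visible

    # usage: split a random test image into visible and invisible parts
    vis, invis = split_visible_invisible(np.random.rand(128, 128))
    ```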

    Learned Interferometric Imaging for the SPIDER Instrument

    The Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) is an optical interferometric imaging device that aims to offer an alternative to the large space telescope designs of today with reduced size, weight and power consumption. This is achieved through interferometric imaging. State-of-the-art methods for reconstructing images from interferometric measurements adopt proximal optimization techniques, which are computationally expensive and require handcrafted priors. In this work we present two data-driven approaches for reconstructing images from measurements made by the SPIDER instrument. These approaches use deep learning to learn prior information from training data, increasing reconstruction quality and reducing the computation time required to recover images by orders of magnitude. Reconstruction time is reduced to ∼10 milliseconds, opening up the possibility of real-time imaging with SPIDER for the first time. Furthermore, we show that these methods can also be applied in domains where training data is scarce, such as astronomical imaging, by leveraging transfer learning from domains where plenty of training data are available.
    Comment: 21 pages, 14 figures
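
    As a rough illustration of the data-driven approach, the sketch below trains a small residual CNN to refine a crude "dirty" reconstruction into an image estimate; the architecture, tensor shapes and random stand-in data are assumptions for illustration only, not the networks or training setup used for SPIDER.

    ```python
    import torch
    import torch.nn as nn

    class PostProcessCNN(nn.Module):
        # Placeholder residual CNN: learns a correction on top of a crude
        # reconstruction obtained by applying the adjoint to the measurements.
        def __init__(self, channels=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, 1, 3, padding=1),
            )

        def forward(self, dirty):
            return dirty + self.net(dirty)  # residual correction

    model = PostProcessCNN()
    dirty = torch.randn(8, 1, 64, 64)   # stand-in batch of dirty images
    target = torch.randn(8, 1, 64, 64)  # matching stand-in ground truth
    loss = nn.functional.mse_loss(model(dirty), target)
    loss.backward()                     # gradients for one training step
    ```

    Once trained, a single forward pass replaces an iterative proximal solve, which is what makes millisecond-scale reconstruction plausible.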

    On the Adjoint Operator in Photoacoustic Tomography

    Photoacoustic Tomography (PAT) is an emerging biomedical "imaging from coupled physics" technique, in which the image contrast is due to optical absorption, but the information is carried to the surface of the tissue as ultrasound pulses. Many algorithms and formulae for PAT image reconstruction have been proposed for the case when a complete data set is available. In many practical imaging scenarios, however, it is not possible to obtain the full data, or the data may be sub-sampled for faster data acquisition. In such cases, image reconstruction algorithms that can incorporate prior knowledge to ameliorate the loss of data are required. Hence, recently there has been an increased interest in using variational image reconstruction. A crucial ingredient for the application of these techniques is the adjoint of the PAT forward operator, which is described in this article from physical, theoretical and numerical perspectives. First, a simple mathematical derivation of the adjoint of the PAT forward operator in the continuous framework is presented. Then, an efficient numerical implementation of the adjoint using a k-space time domain wave propagation model is described and illustrated in the context of variational PAT image reconstruction, on both 2D and 3D examples including inhomogeneous sound speed. The principal advantage of this analytical adjoint over an algebraic adjoint (obtained by taking the direct adjoint of the particular numerical forward scheme used) is that it can be implemented using currently available fast wave propagation solvers.
    Comment: submitted to "Inverse Problems"
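
    To see where the adjoint enters, here is a minimal numpy sketch of gradient descent on the data fidelity (1/2)||Ax - y||^2, with a dense random matrix standing in for the PAT forward operator and its transpose for the analytical adjoint; the sizes, step size and iteration count are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 100))     # stand-in forward operator
    y = A @ rng.standard_normal(100)        # simulated data

    # Each step applies the forward operator and its adjoint: the gradient
    # of (1/2)||A x - y||^2 is A^T (A x - y). In PAT, A is a wave solver
    # and A^T the analytical adjoint described in the article.
    x = np.zeros(100)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L = ||A||_2^2
    for _ in range(500):
        x -= step * (A.T @ (A @ x - y))
    ```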

    Restarting projection methods for rational eigenproblems arising in fluid‐solid vibrations

    For nonlinear eigenvalue problems T(λ)x = 0 whose eigenvalues satisfy a minmax characterization, iterative projection methods combined with safeguarded iteration are suitable for computing all eigenvalues in a given interval. Such methods hit their limitations when a large number of eigenvalues is required. In this paper we discuss restart procedures that are able to cope with this problem, and we evaluate them for a rational eigenvalue problem governing vibrations of a fluid-solid structure.
    First Published Online: 14 Oct 201
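
    For context, the sketch below implements a plain safeguarded iteration for the m-th eigenvalue of a symmetric nonlinear eigenproblem; it assumes a callable T(lam) and a bracketing interval on which the Rayleigh functional's defining equation x^T T(lam) x = 0 changes sign, and it omits the iterative projection and restart machinery the paper is concerned with.

    ```python
    import numpy as np
    from scipy.linalg import eigh
    from scipy.optimize import brentq

    def safeguarded_iteration(T, m, sigma0, bracket, tol=1e-10, maxit=50):
        # Safeguarded iteration sketch: alternate between the eigenvector of
        # T(sigma) belonging to its m-th largest eigenvalue and the root of
        # the Rayleigh functional x^T T(lam) x = 0 (assumed bracketed).
        sigma = sigma0
        for _ in range(maxit):
            w, V = eigh(T(sigma))
            x = V[:, np.argsort(w)[-m]]   # m-th largest eigenvalue of T(sigma)
            sigma_new = brentq(lambda lam: x @ T(lam) @ x, *bracket)
            if abs(sigma_new - sigma) < tol:
                return sigma_new, x
            sigma = sigma_new
        return sigma, x
    ```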

    Choose your path wisely: gradient descent in a Bregman distance framework

    We propose an extension of a special form of gradient descent, known in the literature as linearised Bregman iteration, to a larger class of non-convex functions. We replace the classical (squared) two norm metric in the gradient descent setting with a generalised Bregman distance, based on a proper, convex and lower semi-continuous function. The algorithm's global convergence is proven for functions that satisfy the Kurdyka-Łojasiewicz property. Examples illustrate that features of different scales are introduced throughout the iteration, transitioning from coarse to fine. This coarse-to-fine approach with respect to scale allows us to recover solutions of non-convex optimisation problems that are superior to those obtained with conventional gradient descent, or even projected and proximal gradient descent. The effectiveness of the linearised Bregman iteration in combination with early stopping is illustrated for the applications of parallel magnetic resonance imaging, blind deconvolution, as well as image classification with neural networks.
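
    As a concrete instance, the sketch below implements the classical linearised Bregman iteration for sparse recovery from A x = y, with the Bregman distance generated by mu*||x||_1 + (1/(2*delta))*||x||^2; the parameter values and names are illustrative assumptions. The soft-thresholding step is what produces the coarse-to-fine behaviour: large-scale features enter the iterates first, finer ones later.

    ```python
    import numpy as np

    def shrink(v, mu):
        # soft-thresholding, the proximal map of mu*||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

    def linearised_bregman(A, y, mu=1.0, delta=1.0, iters=2000):
        # Gradient descent on (1/2)||A x - y||^2 in the Bregman distance of
        # mu*||x||_1 + (1/(2*delta))*||x||^2: a dual (subgradient) update
        # followed by a shrinkage that lets features in coarse-to-fine.
        v = np.zeros(A.shape[1])
        x = np.zeros(A.shape[1])
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size, 1/||A||_2^2
        for _ in range(iters):
            v -= tau * (A.T @ (A @ x - y))
            x = delta * shrink(v, mu)
        return x

    # usage: recover a sparse vector from underdetermined measurements
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 200))
    x_true = np.zeros(200)
    x_true[[3, 40, 150]] = [2.0, -1.5, 1.0]
    x_rec = linearised_bregman(A, A @ x_true)
    ```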

    Gradient descent in a generalised Bregman distance framework

    We discuss a special form of gradient descent that in the literature has become known as the linearised Bregman iteration. The idea is to replace the classical (squared) two norm metric in the gradient descent setting with a generalised Bregman distance, based on a more general proper, convex and lower semi-continuous functional. Gradient descent as well as the entropic mirror descent by Nemirovsky and Yudin are special cases, as is a specific form of non-linear Landweber iteration introduced by Bachmayr and Burger. We analyse the linearised Bregman iteration in a setting where the functional we want to minimise is neither necessarily Lipschitz-continuous (in the classical sense) nor necessarily convex, and establish a global convergence result under the additional assumption that the functional satisfies the so-called Kurdyka-Łojasiewicz property.
    Comment: Conference proceedings of '2016 Geometric Numerical Integration and its Applications Maths Conference at La Trobe University, Melbourne Australia', MI Lecture Notes series of Kyushu University, six pages, one figure, program code: https://doi.org/10.17863/CAM.671
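
    Since entropic mirror descent is named as a special case, here is a minimal sketch of that instance on the probability simplex, where the Bregman distance is generated by the negative entropy; the step size, iteration count and quadratic test objective are illustrative assumptions.

    ```python
    import numpy as np

    def entropic_mirror_descent(grad_f, x0, tau=0.1, iters=200):
        # Mirror (Bregman) step with J(x) = sum_i x_i log x_i: the update is
        # multiplicative, x <- x * exp(-tau * grad f(x)), renormalised so
        # the iterates stay on the probability simplex.
        x = x0.copy()
        for _ in range(iters):
            x = x * np.exp(-tau * grad_f(x))
            x = x / x.sum()
        return x

    # usage: minimise f(x) = (1/2)||x - c||^2 over the simplex
    c = np.array([0.7, 0.2, 0.1])
    x = entropic_mirror_descent(lambda x: x - c, np.ones(3) / 3)
    ```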